# Encoder-decoder architecture

The models below are encoder-decoder (sequence-to-sequence) language models, listed with their developer, license, tags, and the download and like counts shown on the source page.

| Model | Developer | License | Description | Tags | Downloads | Likes |
| --- | --- | --- | --- | --- | --- | --- |
| T0 3B | bigscience | Apache-2.0 | A 3-billion-parameter member of the T0 family, based on the T5 architecture. Multi-task prompted training gives it zero-shot task generalization, outperforming GPT-3 on many NLP tasks while being far more compact. | Large Language Model, Transformers, English | 3,723 | 100 |
| TURNA GGUF | helizac | Other | A GGUF build of TURNA, a Turkish encoder-decoder language model focused on improving Turkish understanding and generation. | Large Language Model, Transformers | 159 | 3 |
| Comprehend It Multilingual T5 Base | knowledgator | Apache-2.0 | A zero-shot text classification model based on mT5-base that supports classification in nearly 100 languages and works in both classification directions. | Text Classification, Transformers, Multilingual | 420 | 25 |
| Pile T5 Large | EleutherAI | Not specified | An encoder-decoder model trained on the Pile with the T5x library, intended primarily for English text-to-text generation tasks. | Large Language Model, Transformers, English | 112 | 15 |
| T5 Small Wikilingua Vietnamese | minhtoan | MIT | A lightweight pretrained Transformer encoder-decoder model for Vietnamese, specialized for text summarization. | Text Generation, Transformers, Other | 43 | 3 |
| ViT5 Large | VietAI | MIT | A pretrained Transformer encoder-decoder model for Vietnamese. | Large Language Model, Other | 1,444 | 5 |
| ruT5 Base | ai-forever | Not specified | A Russian text-to-text generation model developed by SberDevices, based on the T5 architecture, with 222 million parameters, trained on 300 GB of data. | Large Language Model, Transformers, Other | 5,946 | 18 |
| T0pp | bigscience | Apache-2.0 | An 11-billion-parameter T5-based encoder-decoder model (also written T0++) that excels at zero-shot generalization from English natural-language prompts, outperforming GPT-3 while being far more compact. | Large Language Model, Transformers, English | 7,426 | 401 |
| RoBERTa2RoBERTa L-24 WikiSplit | google | Apache-2.0 | An encoder-decoder model built from RoBERTa checkpoints and fine-tuned for the sentence-splitting task. | Text Generation, Transformers, English | 16 | 8 |
| T0 | bigscience | Apache-2.0 | An encoder-decoder model that demonstrates zero-shot task generalization from English natural-language prompts, outperforming GPT-3 on many tasks while being 16 times smaller. | Large Language Model, Transformers, English | 2,560 | 83 |
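
The three bigscience checkpoints above (T0 3B, T0, and T0pp) share the same text-to-text interface: the task is phrased as a plain-English prompt and the model generates the answer directly, with no fine-tuning. Below is a minimal sketch using the Hugging Face transformers library, assuming transformers and a PyTorch backend are installed; bigscience/T0_3B is the Hub ID behind the T0 3B entry, and bigscience/T0 or bigscience/T0pp can be swapped in for the larger variants.

```python
# Zero-shot prompting with a T0-family checkpoint (a sketch; assumes
# `pip install transformers torch`).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "bigscience/T0_3B"  # Hub ID for the "T0 3B" entry above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The task is stated as a natural-language prompt rather than a label set.
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")

# The encoder reads the prompt; the decoder generates the answer token by token.
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Positive"
```

The same AutoModelForSeq2SeqLM loading and generate pattern applies broadly to the other transformers checkpoints in the table, each with task-appropriate inputs; the TURNA GGUF entry is the exception, since GGUF files target llama.cpp-style runtimes rather than transformers.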